991.
In the design and analysis of any queueing system, one of the main objectives is to reduce congestion, which can be achieved by controlling either the arrival rates or the service rates. This paper adopts the latter approach and analyzes a single-server finite-buffer queue in which customers arrive according to a Poisson process and are served in batches of minimum size a, up to a maximum threshold b. The service times of the batches are arbitrarily distributed and depend on the size of the batch undergoing service. We obtain the joint distribution of the number of customers in the queue and the number with the server, as well as the marginal distributions of the number of customers in the queue, in the system, and with the server. Various performance measures, such as the average number of customers in the queue (system) and with the server, are obtained. Several numerical results are presented in the form of tables and graphs, and it is observed that the batch-size-dependent service rule is more effective in reducing congestion than a rule in which the service rates of the batches remain the same irrespective of batch size. The model has potential applications in manufacturing, computer-communication networks, telecommunication systems, and group testing.
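As a rough illustration of the batch-service discipline described above, the following sketch simulates a finite-buffer queue under the general bulk-service (a, b) rule with a batch-size-dependent mean service time. It is a toy discrete-event simulation, not the paper's analytical solution; the arrival rate, buffer size, and the particular service-time function are illustrative assumptions, and exponential service times are used only for simplicity even though the paper allows arbitrary distributions.

```python
import random

def mean_service(k):
    # Assumed batch-size-dependent rule: larger batches take longer,
    # but less than proportionally (purely illustrative).
    return 0.5 + 0.2 * k

def simulate_batch_queue(lam=2.0, a=2, b=5, buffer_size=20, horizon=100_000):
    """Toy discrete-event simulation of a finite-buffer Poisson-arrival queue
    served in batches under the general bulk-service (a, b) rule, with a
    service time whose mean depends on the batch size."""
    t, queue, served_batches, lost = 0.0, 0, 0, 0
    next_arrival = random.expovariate(lam)
    service_end = float("inf")          # server currently idle
    area = 0.0                          # time-integral of the queue length

    def start_batch(now):
        nonlocal queue, service_end
        k = min(queue, b)               # serve at most b customers at once
        queue -= k
        service_end = now + random.expovariate(1.0 / mean_service(k))

    while t < horizon:
        t_next = min(next_arrival, service_end)
        area += queue * (t_next - t)
        t = t_next
        if t == next_arrival:           # arrival event
            if queue < buffer_size:
                queue += 1
            else:
                lost += 1               # buffer full: customer is lost
            next_arrival = t + random.expovariate(lam)
            if service_end == float("inf") and queue >= a:
                start_batch(t)          # at least a customers: begin service
        else:                           # service-completion event
            served_batches += 1
            if queue >= a:
                start_batch(t)
            else:
                service_end = float("inf")
    return {"avg_queue_length": area / t, "served_batches": served_batches,
            "lost_customers": lost}

print(simulate_batch_queue())
```

Comparing the output of this sketch for a batch-size-dependent `mean_service` against a constant one gives a quick, informal feel for the congestion reduction the paper quantifies analytically.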
992.
In recent years, grid technology has grown so quickly that it is now used in many scientific experiments and research centers. A large number of storage elements and computational resources are combined to form a grid, which provides shared access to extra computing power. In particular, data grids serve data-intensive applications and provide intensive resources across widely distributed communities. Data replication is an efficient way of distributing replicas across a data grid, making it possible to access the same data at different locations. Replication reduces data access time and improves system performance. In this paper, we propose a new dynamic data replication algorithm, named PDDRA, that improves on traditional algorithms. The proposed algorithm is based on the assumption that members of a VO (Virtual Organization) have similar interests in files. Based on this assumption and on the file access history, PDDRA predicts the future needs of grid sites and pre-fetches a sequence of files to the requesting grid site, so that the next time the site needs a file, it is locally available. This considerably reduces access latency, response time, and bandwidth consumption. PDDRA consists of three phases: storing file access patterns; requesting a file and performing replication and pre-fetching; and replacement. The algorithm was tested with OptorSim, a grid simulator developed by the European Data Grid project. The simulation results show that the proposed algorithm outperforms other algorithms in terms of job execution time, effective network usage, total number of replications, hit ratio, and percentage of storage filled.
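The pre-fetching idea can be sketched with a simple successor-frequency predictor: record which files tend to follow which in a site's access history, then pre-fetch the most frequent successors of each requested file. This is only a minimal illustration of history-based prediction; the class name, method names, and the first-order (Markov-style) successor model are assumptions, not the exact prediction scheme used by PDDRA.

```python
from collections import defaultdict

class PrefetchPredictor:
    """Toy sketch of history-based pre-fetching: remember, per file, which
    files were requested immediately afterwards, and propose the most
    frequent successors as pre-fetch candidates."""

    def __init__(self, prefetch_depth=2):
        self.successors = defaultdict(lambda: defaultdict(int))
        self.last_file = None
        self.prefetch_depth = prefetch_depth

    def record_access(self, file_id):
        # Phase-1-style bookkeeping: store the file access pattern.
        if self.last_file is not None:
            self.successors[self.last_file][file_id] += 1
        self.last_file = file_id

    def predict(self, file_id):
        # Rank successors by observed frequency and return the top ones.
        ranked = sorted(self.successors[file_id].items(),
                        key=lambda kv: kv[1], reverse=True)
        return [f for f, _ in ranked[:self.prefetch_depth]]

# Usage: replay a (hypothetical) access history, then decide what to pre-fetch.
predictor = PrefetchPredictor()
for f in ["f1", "f2", "f3", "f1", "f2", "f4", "f1", "f2", "f3"]:
    predictor.record_access(f)
print(predictor.predict("f2"))   # e.g. ['f3', 'f4']
```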
993.
Scientific applications such as protein sequence analysis require a coordination of resources. Hundreds upon hundreds of protein sequences are deposited into data banks by the research community, so finding a similar protein sequence entails an extensive database search. This search becomes easier, and the time taken is reduced, when it is conducted in a grid environment implemented using the Globus Toolkit. This paper proposes the use of Bacteria Foraging Optimization (BFO) for finding similar protein sequences in existing databases. Using BFO further reduces the time a resource takes to execute user requests. In addition, the resources utilized by the proposed method are better balanced than under the existing scheduling algorithms, and the number of tasks executed is higher than with the existing algorithms, even though task execution falls as the number of resources increases, which may be due to network failures and similar causes. The proposed BFO has been compared with the existing First Come First Serve (FCFS) and Minimum Execution Time (MET) scheduling algorithms and found to perform well in terms of makespan, resource utilization, and minimizing the non-execution of client requests.
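The scheduling side of the approach can be illustrated with a deliberately simplified, BFO-flavoured search over task-to-resource assignments that minimizes makespan. The sketch below keeps only a chemotaxis-like step (random local reassignments accepted when they improve the objective) and omits the swarming, reproduction, and elimination-dispersal steps of full Bacteria Foraging Optimization; all data and parameters are illustrative.

```python
import random

def makespan(assignment, task_len, resource_speed):
    """Completion time of the most loaded resource for a task-to-resource map."""
    load = [0.0] * len(resource_speed)
    for task, res in enumerate(assignment):
        load[res] += task_len[task] / resource_speed[res]
    return max(load)

def bfo_like_schedule(task_len, resource_speed, bacteria=20, steps=200):
    """Each 'bacterium' is a candidate schedule; a chemotaxis-like move is a
    random local reassignment kept only if it reduces the makespan."""
    n_tasks, n_res = len(task_len), len(resource_speed)
    population = [[random.randrange(n_res) for _ in range(n_tasks)]
                  for _ in range(bacteria)]
    best = list(min(population, key=lambda s: makespan(s, task_len, resource_speed)))
    for _ in range(steps):
        for s in population:
            before = makespan(s, task_len, resource_speed)
            task = random.randrange(n_tasks)
            old = s[task]
            s[task] = random.randrange(n_res)          # "tumble"
            if makespan(s, task_len, resource_speed) > before:
                s[task] = old                          # worse: revert the move
        cand = min(population, key=lambda s: makespan(s, task_len, resource_speed))
        if makespan(cand, task_len, resource_speed) < makespan(best, task_len, resource_speed):
            best = list(cand)
    return best, makespan(best, task_len, resource_speed)

tasks = [random.uniform(1.0, 10.0) for _ in range(30)]   # e.g. sequence-search jobs
speeds = [1.0, 1.5, 2.0, 0.8]                            # heterogeneous grid resources
assignment, ms = bfo_like_schedule(tasks, speeds)
print(ms)
```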
994.
Creating simple marketplaces with common rules that enable the dynamic selection and consumption of functionality is the missing link that would allow small businesses to enter the cloud, not only as consumers but also as vendors. In this paper, we present the concepts behind a hybrid service and process repository that can act as the foundation for such a marketplace, as well as a prototype that allowed us to test various real-world scenarios. The advantage of a hybrid service and process repository is that it not only holds a flat list of services but also exposes a generic set of use cases, together with information on how specific services can be used to implement those use cases and information for selecting services at run-time according to the customer's goal functions.
995.
Due to the large variety of computing resources and, consequently, the large number of different types of service level agreements (SLAs), computing resource markets face the problem of low market liquidity. Restricting the number of different resource types to a small set of standardized computing resources appears to be the appropriate way to counteract this problem. Standardized computing resources are defined through an SLA template, which specifies the structure of an SLA, the service attributes, the names of the service attributes, and the service attribute values. However, since existing research has so far introduced only static SLA templates, such templates cannot reflect changes in user needs and market structures. To address this shortcoming, we present a novel approach of adaptive SLA matching. This approach adapts SLA templates based on the SLA mappings of users. It allows Cloud users to define mappings between a public SLA template, which is available in the Cloud market, and their private SLA templates, which are used for the various in-house business processes of the Cloud user. Besides showing how public SLA templates are adapted to the demand of Cloud users, we also analyze the costs and benefits of this approach. Costs are incurred every time a user has to define a new SLA mapping to a public SLA template because that template has been adapted. In particular, we investigate how the costs differ with respect to the public SLA template adaptation method. The simulation results show that the use of heuristics within the adaptation methods allows the costs and benefits of the SLA mapping approach to be balanced.
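Conceptually, an SLA mapping translates attribute names (and, where needed, attribute values) of a user's private SLA template into those of the public SLA template. The snippet below is a minimal sketch of such a translation; the attribute names, the unit conversion, and the dictionary-based representation are hypothetical and are not taken from the paper.

```python
# Hypothetical public SLA template of the Cloud market: attribute -> type.
PUBLIC_TEMPLATE = {"CPUCores": int, "MemoryGB": int, "Availability": float}

# Hypothetical SLA mapping from a user's private template to the public one:
# either a plain rename, or a (rename, value-conversion) pair.
SLA_MAPPING = {
    "NumberOfProcessors": "CPUCores",
    "RAM_MB": ("MemoryGB", lambda mb: mb // 1024),   # unit conversion
    "Uptime": "Availability",
}

def to_public_sla(private_sla):
    """Translate a private SLA into the public template via the mapping."""
    public = {}
    for attr, value in private_sla.items():
        target = SLA_MAPPING.get(attr)
        if target is None:
            continue                         # attribute not offered publicly
        if isinstance(target, tuple):        # name change plus value transform
            name, convert = target
            public[name] = convert(value)
        else:
            public[target] = value
    return public

print(to_public_sla({"NumberOfProcessors": 8, "RAM_MB": 16384, "Uptime": 99.9}))
```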
996.
It is increasingly common to see computer-based simulation used as a vehicle to model and analyze business processes for process management and improvement. While a number of business process management (BPM) and business process simulation (BPS) methodologies, approaches, and tools are available, it is more desirable to have a systemic BPS approach for operational decision support, from constructing process models based on historical data to simulating processes for typical and common problems. In this paper, we propose a generic BPS approach for operational decision support that includes business process modeling and workflow simulation with the generated models. Processes are modeled with event graphs through process mining from workflow logs that integrate comprehensive information about the control-flow, data, and resource aspects of a business process. A case study of a credit card application is presented to illustrate the steps involved in constructing an event graph, and the evaluation is detailed in terms of precision, generalization, and robustness. Based on the constructed event graph model, we simulate the process under different scenarios and analyze the simulation logs for three generic problems in the case study: 1) a suitable resource allocation plan for different case arrival rates; 2) teamwork performance under different case arrival rates; and 3) evaluation and prediction of personal performance. Our experimental results show that the proposed approach is able to model business processes using event graphs and to simulate the processes for common operational decision support, both of which play an important role in process management and improvement.
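A first step toward an event-graph model is mining control-flow and resource information from the workflow log. The sketch below derives direct-follows relations and the resources observed per activity from a toy credit-card-application log; the log format, activity names, and resource names are illustrative assumptions, not the paper's actual data or mining algorithm.

```python
from collections import defaultdict

def mine_direct_follows(log):
    """Toy process-mining step: derive direct-follows (control-flow) edges
    with their frequencies, plus the set of resources seen per activity.
    Assumed log format: {case_id: [(activity, resource), ...]} ordered by time."""
    edges = defaultdict(int)
    resources = defaultdict(set)
    for case, events in log.items():
        for (act, res), (nxt, _) in zip(events, events[1:]):
            edges[(act, nxt)] += 1
            resources[act].add(res)
        last_act, last_res = events[-1]
        resources[last_act].add(last_res)
    return dict(edges), {a: sorted(r) for a, r in resources.items()}

# Hypothetical credit-card-application log: (activity, resource) per case.
log = {
    "case1": [("receive", "clerk1"), ("check_credit", "officer1"), ("approve", "manager1")],
    "case2": [("receive", "clerk2"), ("check_credit", "officer1"), ("reject", "manager1")],
    "case3": [("receive", "clerk1"), ("check_credit", "officer2"), ("approve", "manager1")],
}
edges, resources = mine_direct_follows(log)
print(edges)       # e.g. {('receive', 'check_credit'): 3, ...}
print(resources)   # which resources performed each activity
```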
997.
Current research investigates a single cost for cost-sensitive neural networks (CNN) for decision making, which may not be feasible for real cost-sensitive decisions that involve multiple costs. We propose to modify the existing model, the traditional back-propagation neural network (TNN), by extending the back-propagation error equation to multiple-cost decisions. In this multiple-cost extension, all costs are normalized to the same interval (i.e., between 0 and 1) as the error estimate generated in the TNN. A comparative analysis of accuracy was performed across three settings for constant costs: (1) TNN versus CNN with one constant cost (CNN-1C), (2) TNN versus CNN with two constant costs (CNN-2C), and (3) CNN-1C versus CNN-2C. A similar accuracy analysis was made for non-constant costs: (1) TNN versus CNN with one non-constant cost (CNN-1NC), (2) TNN versus CNN with two non-constant costs (CNN-2NC), and (3) CNN-1NC versus CNN-2NC. Furthermore, we compared the misclassification cost of the CNNs for both constant and non-constant costs (CNN-1C vs. CNN-2C and CNN-1NC vs. CNN-2NC). Our findings demonstrate a trade-off between accuracy and misclassification cost in the proposed CNN model. To obtain higher accuracy and lower misclassification cost, our results suggest merging all constant cost matrices into one constant cost matrix for decision making; for multiple non-constant cost matrices, they suggest maintaining separate matrices to enhance the accuracy and reduce the misclassification cost.
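The core idea, normalizing every misclassification cost matrix into the same [0, 1] interval as the back-propagation error and using it to weight the output error, can be sketched as follows. This is an illustrative reading of the extension rather than the paper's exact equations; in particular, the way the matrices are combined (simple averaging) and all numeric values are assumptions.

```python
import numpy as np

def normalize_costs(cost_matrix):
    """Scale a misclassification cost matrix into [0, 1], the same range
    as the back-propagation error term."""
    c = np.asarray(cost_matrix, dtype=float)
    return (c - c.min()) / (c.max() - c.min() + 1e-12)

def cost_weighted_delta(y_pred, y_true_idx, cost_matrices):
    """Illustrative output-layer delta for a cost-sensitive network: the
    usual (prediction - target) error is weighted by the normalized cost of
    confusing the true class with each predicted class."""
    n_classes = y_pred.shape[0]
    target = np.zeros(n_classes)
    target[y_true_idx] = 1.0
    # Combine several normalized cost matrices (e.g. two constant costs)
    # by averaging them -- an assumed, not prescribed, merging rule.
    combined = np.mean([normalize_costs(c) for c in cost_matrices], axis=0)
    weights = combined[y_true_idx].copy()   # cost of predicting each class
    weights[y_true_idx] = 1.0               # correct class keeps unit weight
    return (y_pred - target) * weights

costs_a = [[0, 5], [1, 0]]                  # hypothetical cost matrices
costs_b = [[0, 2], [8, 0]]
print(cost_weighted_delta(np.array([0.3, 0.7]), 0, [costs_a, costs_b]))
```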
998.
The proliferation of online business transactions has led to a large number of identity theft incidents, which have imposed expensive costs on consumers and e-commerce industries. Fighting identity theft is important for both online businesses and consumers. Although the practical significance of fighting identity theft has attracted great interest, empirical studies on identity theft are very limited. Drawing upon coping behavior theories, this study examines two types of coping behavior used to fight identity theft: conventional coping and technological coping. Following a structural equation modeling approach, we test the model using data collected from 117 subjects through a survey. The results reveal that both conventional coping and technological coping are effective in defending against identity theft. Technological coping is determined by an individual's conventional coping, self-efficacy, perceived effectiveness of coping, and social influence. This study presents a timely empirical study of identity theft and provides valuable insights for consumers, government agencies, and e-commerce industries.
999.
Business Process Re-engineering (BPR) is being used to improve the efficiency of organizational processes; however, a number of obstacles have prevented its full potential from being realised. One of these obstacles is an emphasis on the business process itself to the exclusion of other important knowledge of the organization. Another is the lack of tools for identifying the causes of inefficiencies and inconsistencies in BPR. In this paper we propose a methodology for BPR that overcomes these two obstacles through the use of a formal organizational ontology together with knowledge structure and knowledge source maps. These knowledge maps are represented formally to support an inferencing mechanism that helps to automatically identify the causes of the inefficiencies and inconsistencies. We demonstrate the applicability of this methodology through a case study of a university domain.
1000.
A key process for post-secondary educational institutions is the definition of course timetables and classroom assignments. Manual scheduling methods require enormous amounts of time and resources to deliver results of questionable quality, and multiple course and classroom conflicts usually occur. This article presents a scheduling system, implemented in a Web environment, that generates optimal schedules via an integer-programming model. Among its functionalities, the system enables direct interaction with instructors in order to gather data on their time availability for teaching courses. The results demonstrate that significant improvements over the typical fully manual process were obtained.
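A compact integer-programming formulation of the core assignment problem, with binary variables x[c, s, r] deciding whether course c is taught in timeslot s and room r, can be written in a few lines. The sketch below uses the open-source PuLP modeller purely for illustration (the article does not say which solver or modelling tool it uses), and the courses, rooms, timeslots, and instructor-availability data are hypothetical.

```python
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, value

courses   = {"C1": "Dr.A", "C2": "Dr.A", "C3": "Dr.B"}   # course -> instructor
slots     = ["Mon9", "Mon11", "Tue9"]
rooms     = ["R101", "R202"]
# Instructor availability gathered from the instructors (1 = available),
# mirroring the system's direct interaction with instructors.
available = {("Dr.A", "Mon9"): 1, ("Dr.A", "Mon11"): 1, ("Dr.A", "Tue9"): 0,
             ("Dr.B", "Mon9"): 0, ("Dr.B", "Mon11"): 1, ("Dr.B", "Tue9"): 1}

prob = LpProblem("course_timetable", LpMinimize)
x = LpVariable.dicts("x", [(c, s, r) for c in courses for s in slots for r in rooms],
                     cat=LpBinary)

# Objective: penalise scheduling a course when its instructor is unavailable.
prob += lpSum((1 - available[(courses[c], s)]) * x[(c, s, r)]
              for c in courses for s in slots for r in rooms)

# Each course gets exactly one timeslot and one room.
for c in courses:
    prob += lpSum(x[(c, s, r)] for s in slots for r in rooms) == 1
# No room is double-booked in any timeslot.
for s in slots:
    for r in rooms:
        prob += lpSum(x[(c, s, r)] for c in courses) <= 1
# An instructor teaches at most one course per timeslot.
for s in slots:
    for inst in set(courses.values()):
        prob += lpSum(x[(c, s, r)] for c in courses if courses[c] == inst
                      for r in rooms) <= 1

prob.solve()
print([(c, s, r) for (c, s, r), var in x.items() if value(var) == 1])
```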